Doubling Reduction Amount of Hash Functions
Author
Abstract
Suppose we are given a particular fixed-length hash function h : 256-bits → 128-bits. How can we use h to compute a 128-bit strong collision-free hash of a 512-bit input block? We consider several possible ways to extend h to a hash function H : 512-bits → 128-bits. In the following, we suppose that m is 512 bits long, and we write m = m1m2, where m1 and m2 are 256 bits each.

Method 1. Define H(m) = H(m1m2) = h(m1) ⊕ h(m2). Unfortunately, this fails to be either strong or weak collision-free, since m′ = m2m1 always collides with m under H (except in the special case that m1 = m2).

Method 2. Define H(m) = H(m1m2) = h(h(m1)h(m2)).
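The two constructions above can be sketched as follows. The fixed-length hash h is simulated here by MD5 (which happens to output 128 bits); this is purely illustrative, since MD5 is not collision-free, but it suffices to demonstrate the structural collision in Method 1.

```python
import hashlib

def h(x: bytes) -> bytes:
    # Stand-in for the fixed-length hash h : 256 bits -> 128 bits.
    # MD5 is illustrative only; its 16-byte digest matches the 128-bit output.
    assert len(x) == 32  # 256-bit input
    return hashlib.md5(x).digest()

def H_method1(m: bytes) -> bytes:
    # Method 1: H(m1 m2) = h(m1) XOR h(m2)
    m1, m2 = m[:32], m[32:]
    return bytes(a ^ b for a, b in zip(h(m1), h(m2)))

def H_method2(m: bytes) -> bytes:
    # Method 2: H(m1 m2) = h(h(m1) || h(m2)); the concatenated
    # digests form a 256-bit input, so h applies directly.
    m1, m2 = m[:32], m[32:]
    return h(h(m1) + h(m2))

m1 = b"A" * 32
m2 = b"B" * 32

# Swapping the halves collides under Method 1, because XOR is commutative...
assert H_method1(m1 + m2) == H_method1(m2 + m1)
# ...but not under Method 2, where the order of the inner digests matters.
assert H_method2(m1 + m2) != H_method2(m2 + m1)
```

The assertion pair makes the abstract's point concrete: Method 1 collides for every m with distinct halves, while Method 2 preserves the order of m1 and m2 inside the outer hash.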
Similar Resources
On the Impossibility of Efficiently Combining Collision Resistant Hash Functions
Let H1, H2 be two hash functions. We wish to construct a new hash function H that is collision resistant if at least one of H1 or H2 is collision resistant. Concatenating the output of H1 and H2 clearly works, but at the cost of doubling the hash output size. We ask whether a better construction exists, namely, can we hedge our bets without doubling the size of the output? We take a step toward...
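The concatenation combiner mentioned above can be sketched as below. SHA-256 and MD5 are illustrative stand-ins for H1 and H2; the point is that any collision for the combined function is simultaneously a collision for both components, at the cost of an output as long as both digests together.

```python
import hashlib

def combined(x: bytes) -> bytes:
    # Concatenation combiner: C(x) = H1(x) || H2(x).
    # C is collision resistant if either component is, since a collision
    # for C collides H1 and H2 at once -- but the output size doubles.
    h1 = hashlib.sha256(x).digest()  # H1: 32 bytes
    h2 = hashlib.md5(x).digest()     # H2: 16 bytes
    return h1 + h2                   # 48 bytes total

digest = combined(b"example input")
assert len(digest) == 48
```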
A Practical Attack against Knapsack based Hash Functions (Extended Abstract)
In this paper, we show that lattice reduction is a very powerful tool to find collisions in knapsack based compression functions and hash functions. In particular, it can be used to break the knapsack based hash function that was introduced by Damgård [3].
Constructing Optimal XOR-Functions to Minimize Cache Conflict Misses
Stringent power and performance constraints, coupled with detailed knowledge of the target applications of a processor, allow for application-specific processor optimizations. It has been shown that application-specific reconfigurable hash functions eliminate a large number of cache conflict misses. These hash functions minimize conflicts by modifying the mapping of cache blocks to cache sets....
A Framework for Iterative Hash Functions — HAIFA
For years hash functions were built from compression functions using the Merkle-Damgård construction. Recently, several flaws in this construction were identified, allowing for pre-image attacks and second preimage attacks on such hash functions even when the underlying compression functions are secure. In this paper we propose the HAsh Iterative FrAmework (HAIFA). Our framework can fix many of...
An Improved Hash Function Based on the Tillich-Zémor Hash Function
Using the idea behind the Tillich-Zémor hash function, we propose a new hash function. Our hash function is parallelizable and its collision resistance is implied by a hardness assumption on a mathematical problem. It is also secure against the known attacks, and is to date the most secure variant of the Tillich-Zémor hash function.